
    Spontaneous vs. posed facial behavior: automatic analysis of brow actions

    Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of the temporal dynamics of facial expressions for interpreting observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles are fundamentally different from each other. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discriminating between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters, including the maximal intensity, duration, and order of occurrence. We use GentleBoost to select the most important of these parameters. The selected parameters are used further to train Relevance Vector Machines to determine, per temporal segment of an activated AU, whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate. Categories and Subject Descriptors: I.2.10 [Vision and Scene Understanding]: motion, modeling, and recovery of physical attributes.
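
    A minimal Python sketch of this pipeline, under loudly stated assumptions: the features and labels are synthetic placeholders, GradientBoostingClassifier stands in for GentleBoost, a logistic-regression posterior stands in for the Relevance Vector Machine (neither GentleBoost nor RVMs ship with scikit-learn), and the fusion rule is a simple mean of per-segment posteriors rather than the paper's exact decision function.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Placeholder mid-level features per AU temporal segment:
        # [max_intensity, duration_s, order_of_occurrence]
        X = rng.random((200, 3))
        y = rng.integers(0, 2, 200)            # 0 = posed, 1 = spontaneous

        # Feature selection via boosting importances (stand-in for GentleBoost)
        booster = GradientBoostingClassifier(n_estimators=50).fit(X, y)
        top = np.argsort(booster.feature_importances_)[-2:]

        # Per-segment probabilistic classifier (stand-in for an RVM)
        clf = LogisticRegression().fit(X[:, top], y)

        def classify_brow_action(segments):
            """Fuse per-segment posteriors into one label for the whole action."""
            p = clf.predict_proba(segments[:, top])[:, 1]
            return "spontaneous" if p.mean() > 0.5 else "posed"

        # e.g. one brow action with onset, apex, and offset segments
        print(classify_brow_action(rng.random((3, 3))))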

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    A smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histograms of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles as either 'spontaneous' or 'posed' using support vector machines (SVMs). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis.
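
    The HOG-plus-SVM route that performed best above can be sketched with scikit-image and scikit-learn. This is an illustrative assumption-laden sketch: the random arrays stand in for cropped, intensity-normalized face frames from a smile video, and the HOG parameters are common defaults rather than the paper's settings.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        frames = rng.random((40, 64, 64))      # placeholder grayscale face crops
        labels = rng.integers(0, 2, 40)        # 0 = posed, 1 = spontaneous

        # One HOG descriptor per frame
        X = np.array([hog(f, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for f in frames])

        clf = SVC(kernel="linear").fit(X, labels)
        print(clf.predict(X[:5]))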

    Darwin's Duchenne: Eye constriction during infant joy and distress

    Darwin proposed that smiles with eye constriction (Duchenne smiles) index strong positive emotion in infants, while cry-faces with eye constriction index strong negative emotion. Research has supported Darwin's proposal with respect to smiling, but there has been little parallel research on cry-faces (open-mouth expressions with lateral lip stretching). To investigate the possibility that eye constriction indexes the affective intensity of positive and negative emotions, we first conducted the Face-to-Face/Still-Face (FFSF) procedure at 6 months. In the FFSF, three minutes of naturalistic infant-parent play interaction (which elicits more smiles than cry-faces) are followed by two minutes in which the parent holds an unresponsive still face (which elicits more cry-faces than smiles). Consistent with Darwin's proposal, eye constriction was associated with stronger smiling and with stronger cry-faces. In addition, the proportion of smiles with eye constriction was higher during the positive-emotion-eliciting play episode than during the still-face episode. In parallel, the proportion of cry-faces with eye constriction was higher during the negative-emotion-eliciting still-face episode than during play. These results are consonant with the hypothesis that eye constriction indexes the affective intensity of both positive and negative facial configurations. A preponderance of eye constriction during cry-faces was also observed with a second elicitor of intense negative emotion, vaccination injections, at both 6 and 12 months of age. The results support the existence of a Duchenne distress expression that parallels the better-known Duchenne smile, suggesting that eye constriction (the Duchenne marker) has a systematic association with early facial expressions of intense negative and positive emotion. © 2013 Mattson et al.

    The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions are among the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far focused mostly on the emotional aspect. Consequently, most databases of facial expressions available to the research community include only emotional expressions, neglecting the largely unexplored domain of conversational expressions. To fill this gap, we present the MPI Facial Expression Database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees facial expressions that are both well defined and natural. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, and from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI Facial Expression Database will enable researchers from different fields (including the perceptual and cognitive sciences, affective computing, and computer vision) to investigate the processing of a wider range of natural facial expressions.

    Human and machine validation of 14 databases of dynamic facial expressions

    With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, than spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
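
    The Study 2 link between AU prototypicality and recognition accuracy can be illustrated in a few lines, assuming invented placeholder AU codings and per-clip hit rates; prototypicality is scored here as cosine similarity to a prototype AU pattern, one plausible choice rather than the paper's exact measure.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)

        # Placeholder: binary activation of 10 AUs per clip, plus an invented
        # prototype AU pattern for each clip's intended emotion.
        clip_aus = rng.integers(0, 2, (100, 10)).astype(float)
        prototype = rng.integers(0, 2, 10).astype(float)

        # Prototypicality = cosine similarity to the prototype pattern
        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

        prototypicality = np.array([cosine(c, prototype) for c in clip_aus])
        accuracy = rng.random(100)             # placeholder per-clip hit rates

        r, p = pearsonr(prototypicality, accuracy)
        print(f"r = {r:.2f}, p = {p:.3f}")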

    Perception of Dynamic Facial Expressions of Emotion


    The reality of recovered memories: corroborating continuous and discontinuous memories of childhood sexual abuse

    Although controversy surrounds the relative authenticity of discontinuous versus continuous memories of childhood sexual abuse (CSA), little is known about whether such memories differ in their likelihood of corroborative evidence. Individuals reporting CSA memories were interviewed, and two independent raters attempted to find corroborative information for the allegations. Continuous CSA memories, and discontinuous memories that were unexpectedly recalled outside therapy, were more likely to be corroborated than anticipated discontinuous memories recovered in therapy. Evidence that suggestion during therapy may mediate these differences comes from the additional finding that individuals who recalled their memories outside therapy were markedly more surprised at the existence of those memories than were individuals who initially recalled them in therapy. These results indicate that discontinuous CSA memories spontaneously retrieved outside of therapy may be accurate, while implicating expectations arising from suggestion during therapy in the production of false CSA memories.

    The painful face - Pain expression recognition using active appearance models

    Image and Vision Computing, 27(12), 1788–1796. doi:10.1016/j.imavis.2009.05.007